991.
We propose a semiparametric marginal modeling approach for the longitudinal analysis of cohorts with data missing due to death and non-response, estimating regression parameters that are interpreted as conditional on being alive. The proposed method accommodates outcomes and time-dependent covariates that are missing not at random, with non-monotone missingness patterns, via inverse-probability weighting. Missing covariates are replaced by consistent estimates derived from a simultaneously solved inverse-probability-weighted estimating equation. Data points with observed outcomes but missing covariates therefore contribute information beyond the estimated weights, while numerical integration over the missing covariates is avoided. The approach is applied to a cohort of elderly female hip fracture patients to estimate the prevalence of walking disability over time as a function of body composition, inflammation, and age. Copyright © 2010 John Wiley & Sons, Ltd.
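The weighting idea in this abstract can be illustrated with a minimal sketch (not the authors' simultaneous estimating-equation method): a single covariate, a single wave, and an outcome whose chance of being observed depends on that covariate. All variable names and coefficients below are hypothetical, and the outcome is simulated in full only so the estimates can be checked.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical cohort: covariate x is always observed, outcome y is missing for
# some subjects, and the chance of observing y depends on x.  In a real study y
# would be unavailable when r = 0; it is simulated in full here only as a check.
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)              # true marginal mean of y is 1.0
p_obs = 1.0 / (1.0 + np.exp(-(0.2 + 1.5 * x)))
r = rng.binomial(1, p_obs)                          # 1 = outcome observed

# Step 1: model the probability of being observed (the weight model).
X = sm.add_constant(x)
pi_hat = sm.Logit(r, X).fit(disp=0).predict(X)

# Step 2: inverse-probability-weighted (Hajek) estimate of the marginal mean of
# y using observed subjects only; each observed subject stands in for 1/pi_hat
# subjects like them.
w = r / pi_hat
naive = y[r == 1].mean()                # biased upward: observed subjects have larger x
ipw = np.sum(w * y) / np.sum(w)         # w is zero whenever y is unobserved
print(round(naive, 3), round(ipw, 3))   # naive drifts above 1.0; IPW is close to 1.0
```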
992.
Studies of HIV dynamics in AIDS research are important for understanding the pathogenesis of HIV-1 infection and for assessing the effectiveness of antiviral therapies. Nonlinear mixed-effects (NLME) models have been used to model between-subject and within-subject variation in viral load measurements. Normality of both the within-subject random errors and the random effects is a routine assumption for NLME models, but it may be unrealistic, obscuring important features of between-subject and within-subject variation, particularly if the data exhibit skewness. In this paper, we develop a Bayesian approach to NLME models and relax the normality assumption by allowing both the model random errors and the random effects to follow a multivariate skew-normal distribution. The proposed model flexibly captures a broad range of non-normal behavior and includes normality as a special case. We use a real data set from an AIDS study to illustrate the proposed approach by comparing various candidate models. We find that the model with skew-normality provides a better fit to the observed data and that the corresponding parameter estimates are significantly different from those based on the normal model when skewness is present in the data. These findings suggest that assuming a skew-normal distribution is important for achieving robust and reliable results, in particular when the data exhibit skewness. Copyright © 2010 John Wiley & Sons, Ltd.
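As a small, hedged illustration of why the skew-normal family is a convenient relaxation of normality (it is not the paper's Bayesian NLME model), the sketch below uses scipy's skew-normal distribution: with shape parameter a = 0 it reduces exactly to the normal, while a > 0 produces the kind of right-skewed errors the abstract warns about.

```python
import numpy as np
from scipy import stats

# The skew-normal family includes the normal as a special case: with shape
# parameter a = 0 its density is exactly the Gaussian density.
x = np.linspace(-4, 4, 9)
print(np.allclose(stats.skewnorm.pdf(x, a=0), stats.norm.pdf(x)))   # True

# With a > 0 the distribution is right-skewed -- the kind of departure from
# normality that, per the abstract, can distort estimates in a normal NLME fit.
sample = stats.skewnorm.rvs(a=4, size=5000, random_state=1)
print(stats.skew(sample))       # clearly positive sample skewness
print(stats.skewtest(sample))   # formal test rejects symmetry
```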
993.
Specific age-related hypotheses are tested in population-based longitudinal studies. At specific time intervals, both the outcomes of interest and the time-varying covariates are measured. When participants are approached for follow-up, some do not provide data; investigation may show that many died before the follow-up whereas others refused to participate, and some of these non-participants provide no data at later follow-ups either. Few statistical methods for missing data distinguish between 'non-participation' and 'death' among study participants. Augmented inverse probability-weighted estimators are most commonly used in marginal structural models when data are missing at random. Treating non-participation and death as the same, however, may lead to biased estimates and invalid inferences. To overcome this limitation, a multiple inverse probability-weighted approach is presented that accounts for the two types of missing data, non-participation and death, when using a marginal mean model. Under certain conditions, the multiple weighted estimators are consistent and asymptotically normal. Simulation studies are used to examine the finite-sample efficiency of the multiple weighted estimators. The proposed method is applied to study risk factors associated with cognitive decline among aging adults, using data from the Chicago Health and Aging Project (CHAP). Copyright © 2010 John Wiley & Sons, Ltd.
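A toy sketch of the multiple-weighting idea, under strong simplifying assumptions (one follow-up wave, hypothetical variable names, and a naive product weight): death and non-participation get separate missingness models rather than a single "observed vs. missing" model. This only shows the mechanics of building two weight models; it is not the estimator studied in the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000

# Hypothetical single follow-up wave: a subject may have died (d = 1) or, if
# alive, may decline to participate (s = 0); the outcome y is seen only for
# alive participants.  Age drives both kinds of missingness.
age = rng.normal(75, 5, n)
d = rng.binomial(1, 1 / (1 + np.exp(-(-8.0 + 0.09 * age))))          # death
s = np.where(d == 0, rng.binomial(1, 1 / (1 + np.exp(-(6.0 - 0.06 * age)))), 0)
y = 20 - 0.1 * age + rng.normal(size=n)

# Separate weight models for the two missing-data types, instead of a single
# "observed vs. missing" model that lumps death and non-participation together.
A = sm.add_constant(age)
p_alive_hat = 1 - sm.Logit(d, A).fit(disp=0).predict(A)
p_part_hat = sm.Logit(s[d == 0], A[d == 0]).fit(disp=0).predict(A)

# Toy combined weight for alive participants (product of the two estimated
# probabilities); the paper's estimator is more involved than this.
obs = (d == 0) & (s == 1)
w = 1.0 / (p_alive_hat[obs] * p_part_hat[obs])

fit = sm.WLS(y[obs], A[obs], weights=w).fit()
print(fit.params)   # weighted marginal mean model fitted to alive participants
```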
994.
As a geographical cluster detection analysis tool, the spatial scan statistic has been developed for different types of data such as Bernoulli, Poisson, ordinal, exponential and normal. Another interesting data type is multinomial. For example, one may want to find clusters where the disease-type distribution is statistically significantly different from the rest of the study region when there are different types of disease. In this paper, we propose a spatial scan statistic for such data, which is useful for geographical cluster detection analysis for categorical data without any intrinsic order information. The proposed method is applied to meningitis data consisting of five different disease categories to identify areas with distinct disease-type patterns in two counties in the U.K. The performance of the method is evaluated through a simulation study. Copyright © 2010 John Wiley & Sons, Ltd.
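A minimal sketch of the likelihood-ratio computation that a multinomial scan statistic evaluates for one candidate zone, assuming hypothetical counts; the full method would maximize this quantity over many candidate zones and assess significance by Monte Carlo.

```python
import numpy as np
from scipy.special import xlogy

def multinomial_scan_llr(inside, total):
    """Log-likelihood ratio comparing the category distribution inside a
    candidate zone with the distribution in the rest of the study region."""
    inside = np.asarray(inside, dtype=float)
    total = np.asarray(total, dtype=float)
    outside = total - inside
    n_in, n_out, n_all = inside.sum(), outside.sum(), total.sum()
    # xlogy treats 0 * log(0) as 0, the usual convention for scan statistics.
    return (xlogy(inside, inside / n_in).sum()
            + xlogy(outside, outside / n_out).sum()
            - xlogy(total, total / n_all).sum())

# Five hypothetical disease categories; the candidate zone is enriched for the
# third category relative to the rest of the region.
total_counts = [120, 80, 60, 90, 50]
zone_counts = [10, 8, 30, 9, 5]
print(multinomial_scan_llr(zone_counts, total_counts))
```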
995.
Missing data are common in longitudinal studies and can occur in the exposure of interest. There has been little work assessing the impact of missing data in marginal structural models (MSMs), which are used to estimate the effect of an exposure history on an outcome when time-dependent confounding is present. We design a series of simulations based on the Framingham Heart Study data set to investigate the impact of missing data in the primary exposure of interest in a complex, realistic setting. We use a standard application of MSMs to estimate the causal odds ratio of a specific activity history on outcome. We report and discuss the results of four missing data methods, under seven possible missing data structures, including scenarios in which an unmeasured variable predicts missing information. In all missing data structures, we found that a complete case analysis, in which all subjects with missing exposure data are removed from the analysis, provided the least bias. An analysis that censored individuals at the first occasion of missing exposure and included both a censorship model and a propensity model when creating the inverse probability weights also performed well. The presence of an unmeasured predictor of missing data only slightly increased bias, except when the exposure had a large impact on missingness and the unmeasured variable had a large impact on both missingness and the outcome. A discussion of the results is provided using causal diagrams, showing the usefulness of drawing such diagrams before conducting an analysis. Copyright © 2009 John Wiley & Sons, Ltd.
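The sketch below shows a stripped-down, single-time-point version of an IPW-fitted MSM with a complete-case handling of the missing exposure. All names and effect sizes are hypothetical, and the paper's setting is longitudinal with time-dependent confounding, which this toy example does not reproduce.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 4000

# Hypothetical single-time-point data: confounder L, binary exposure A, binary
# outcome Y, and an exposure value that is missing for some subjects (m = 1).
L = rng.normal(size=n)
A = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 0.9 * L))))
Y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.7 * A + 0.8 * L))))
m = rng.binomial(1, 1 / (1 + np.exp(-(-1.5 + 0.5 * L))))   # exposure missing

# Complete-case analysis: drop subjects with missing exposure, then fit the MSM
# with stabilized inverse-probability-of-treatment weights.
cc = m == 0
X_L = sm.add_constant(L[cc])
ps = sm.Logit(A[cc], X_L).fit(disp=0).predict(X_L)          # propensity scores
p_A = A[cc].mean()
sw = np.where(A[cc] == 1, p_A / ps, (1 - p_A) / (1 - ps))   # stabilized weights

msm = sm.GLM(Y[cc], sm.add_constant(A[cc]),
             family=sm.families.Binomial(),
             freq_weights=sw).fit()
print(np.exp(msm.params[1]))   # weighted (marginal) odds ratio for the exposure
```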
996.
Ordinal and quantitative discrete data are frequent in biomedical and neuropsychological studies. We propose a semi-parametric model for the analysis of the change over time of such data in longitudinal studies. A threshold model is defined where the outcome value depends on the current value of an underlying Gaussian latent process. The latent process model is a Gaussian linear mixed model with a non-parametric function of time, f(t), to model the expected change over time. This model includes random effects and a stochastic error process to flexibly handle correlation between repeated measures. The function f(t) and all the model parameters are estimated by penalized likelihood using a cubic-spline approximation for f(t). The smoothing parameter is estimated by an approximate cross-validation criterion. Confidence bands may be computed for the estimated curves for the latent process and, using a Monte Carlo approach, for the outcome in its natural scale. The method is applied to the Paquid cohort data to compare the time-course over 14 years of two cognitive scores in a sample of 350 future Alzheimer patients and in a matched sample of healthy subjects. Copyright © 2010 John Wiley & Sons, Ltd.
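A rough sketch of one ingredient of such a model: representing f(t) with a cubic B-spline basis and shrinking its coefficients with a roughness penalty. Penalized least squares is used here as a stand-in for the paper's penalized likelihood, and the smoothing parameter is simply fixed rather than chosen by approximate cross-validation; everything below is hypothetical illustration, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 14, 300))                 # follow-up time, in years
y = np.sin(t / 3.0) - 0.05 * t + rng.normal(scale=0.3, size=t.size)

# Cubic B-spline basis for f(t), with a clamped knot sequence over follow-up.
k = 3
interior = np.linspace(0, 14, 8)
knots = np.concatenate(([0.0] * k, interior, [14.0] * k))
n_basis = len(knots) - k - 1
B = BSpline.design_matrix(t, knots, k).toarray()     # shape (n_obs, n_basis)

# Second-difference penalty on the spline coefficients (a standard P-spline
# device), standing in for the roughness penalty of the penalized likelihood.
D = np.diff(np.eye(n_basis), n=2, axis=0)
lam = 5.0     # smoothing parameter; the paper chooses it by approximate cross-validation
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
f_hat = B @ coef                                     # estimated f(t) on the latent scale
print(n_basis, float(np.mean((y - f_hat) ** 2)))
```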
997.
In non-inferiority trials that employ the synthesis method, several types of dependencies among test statistics occur because the same information from the historical trial is shared. The conditions under which these dependencies appear may be divided into three categories. The first case is when a new drug is approved on the basis of a single non-inferiority trial. The second case is when a new drug is approved only if two independent non-inferiority trials show positive results. The third case is when two different new drugs are approved against the same active control. The problem with these dependencies is that they can make the type I error rate deviate from the nominal level. To study such deviations, we introduce the unconditional and conditional across-trial type I error rates when the non-inferiority margin is estimated from the historical trial, and we investigate how the dependencies affect them. We show that the unconditional across-trial type I error rate increases dramatically, together with the correlation between the two non-inferiority tests, when a new drug is approved on the basis of two positive non-inferiority trials. We conclude that the conditional across-trial type I error rate involves the unknown treatment effect in the historical trial. The formulae for the conditional across-trial type I error rates provide a way of investigating them for various assumed values of the treatment effect in the historical trial. Copyright © 2010 John Wiley & Sons, Ltd.
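A small Monte Carlo sketch of the second case (two positive non-inferiority trials required), under assumed standard errors and a true effect placed exactly on the null boundary: because both synthesis test statistics reuse the same historical estimate, they are positively correlated, and the simulated probability that both reject exceeds the alpha-squared value that independent trials would give. All numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n_sim = 200_000
alpha = 0.025
z_crit = norm.ppf(1 - alpha)

# Assumed standard errors for the historical estimate and for each new trial;
# the true drug-vs-placebo effect is placed exactly on the null boundary.
se_hist, se_trial = 0.15, 0.20
theta_hist = 0.5                       # true active-control effect vs placebo
delta_true = -theta_hist               # drug vs active control, so drug vs placebo = 0

theta_hat = rng.normal(theta_hist, se_hist, n_sim)   # one historical estimate, shared
d1 = rng.normal(delta_true, se_trial, n_sim)         # trial 1 estimate
d2 = rng.normal(delta_true, se_trial, n_sim)         # trial 2 estimate

se_synth = np.sqrt(se_trial**2 + se_hist**2)
z1 = (d1 + theta_hat) / se_synth       # synthesis test statistic, trial 1
z2 = (d2 + theta_hat) / se_synth       # synthesis test statistic, trial 2

both_reject = np.mean((z1 > z_crit) & (z2 > z_crit))
print(f"target if trials were independent: {alpha**2:.5f}")
print(f"simulated across-trial type I err: {both_reject:.5f}")   # noticeably larger
```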
998.
Comparing two samples with a continuous non-negative score, e.g. a utility score over [0, 1], with a substantial proportion, say 50 per cent, scoring 0 presents distributional problems for most standard tests. A Wilcoxon rank test can be used, but the large number of ties reduces power. I propose a new test, the Wilcoxon rank-sum test performed after removing an equal (and maximal) number of 0's from each sample. This test recovers much of the power. Compared with a (directional) modification of a two-part test proposed by Lachenbruch, the truncated Wilcoxon has similar power when the non-zero scores are independent of the proportion of zeros, but, unlike the two-part test, the truncated Wilcoxon is relatively unaffected when these processes are dependent. Copyright © 2009 John Wiley & Sons, Ltd.
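A direct sketch of the test as described in the abstract, using scipy's Mann-Whitney implementation of the Wilcoxon rank-sum test; the data below are hypothetical utility scores with roughly half of each sample at zero.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def truncated_wilcoxon(x, y, alternative="two-sided"):
    """Wilcoxon rank-sum (Mann-Whitney) test after removing an equal, maximal
    number of zeros from each sample, as described in the abstract."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n_drop = min(np.sum(x == 0), np.sum(y == 0))     # equal and maximal
    x_trunc = np.concatenate([x[x != 0], np.zeros(int(np.sum(x == 0) - n_drop))])
    y_trunc = np.concatenate([y[y != 0], np.zeros(int(np.sum(y == 0) - n_drop))])
    return mannwhitneyu(x_trunc, y_trunc, alternative=alternative)

rng = np.random.default_rng(6)
# Hypothetical utility scores on [0, 1] with roughly half of each sample at 0.
x = np.where(rng.random(120) < 0.55, 0.0, rng.beta(2, 2, 120))
y = np.where(rng.random(120) < 0.45, 0.0, rng.beta(3, 2, 120))
print(truncated_wilcoxon(x, y))
```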
999.
Drawing on our hospital's experience in developing and implementing a data mining project, this paper analyzes the problems currently encountered when large hospitals undertake data mining projects, including requirements analysis, data sources, data quality, and the management of technical staff, and discusses strategies for addressing them.
1000.
Data warehousing and data mining are emerging information technologies, and how to apply them in hospital informatization is a question facing hospital information systems (HIS). Building a HIS-based data warehouse and applying data mining techniques can effectively transform large volumes of source data into useful knowledge that serves the decision-making process. Drawing on hospital medical operations, this paper proposes a solution for implementing such a system and presents the architecture and logical model of a data warehouse for hospital medical operations, together with methods for applying data mining techniques to hospital medical data.
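As a loose illustration of the ETL idea described above (not the system in the paper), the sketch below turns a few hypothetical HIS visit records into a small fact table aggregated by department and month, and then runs a trivial summary query on it.

```python
import pandas as pd

# Hypothetical extract from a HIS transaction table: one row per visit.
visits = pd.DataFrame({
    "visit_id": [1, 2, 3, 4, 5, 6],
    "dept": ["内科", "内科", "外科", "外科", "外科", "儿科"],
    "visit_date": pd.to_datetime(["2009-01-05", "2009-02-11", "2009-01-20",
                                  "2009-02-02", "2009-02-25", "2009-01-30"]),
    "cost": [320.0, 410.0, 1500.0, 1650.0, 1720.0, 260.0],
})

# ETL step: transform the transactional rows into a small fact table aggregated
# by department and month -- the kind of summary a medical-business data
# warehouse would keep for decision support.
fact = (visits
        .assign(month=lambda d: d["visit_date"].dt.to_period("M"))
        .groupby(["dept", "month"], as_index=False)
        .agg(n_visits=("visit_id", "count"), total_cost=("cost", "sum")))

# A trivial "mining" query against the fact table: average cost per visit.
print(fact.assign(avg_cost=lambda d: d["total_cost"] / d["n_visits"]))
```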